Stochastic gradient line Bayesian optimization for efficient noise-robust optimization of parameterized quantum circuits
Abstract
Optimizing parameterized quantum circuits is a key routine in using near-term quantum devices. However, the existing algorithms for such optimization require an excessive number of quantum-measurement shots for estimating expectation values of observables and repeating many iterations, whose cost has been a critical obstacle to practical use. We develop an efficient alternative algorithm, stochastic gradient line Bayesian optimization (SGLBO), to address this problem. SGLBO reduces the measurement-shot cost by estimating an appropriate direction for updating the circuit parameters based on stochastic gradient descent (SGD) and by further utilizing Bayesian optimization (BO) to estimate the optimal step size at each iteration of SGD. In addition, we formulate an adaptive measurement-shot strategy and introduce a technique of suffix averaging to reduce the effect of statistical and hardware noise. Our numerical simulation demonstrates that SGLBO augmented with these techniques can drastically reduce the measurement-shot cost, improve the accuracy, and make the optimization noise-robust.
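The combination described in the abstract, an SGD update direction, a one-dimensional BO over the step size, and suffix averaging, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it replaces the quantum circuit with a classical cost of the same trigonometric form so that the parameter-shift gradient rule applies, simulates shot noise as Gaussian noise, and uses a simple Gaussian-process regression with a lower-confidence-bound acquisition for the line search. The adaptive shot strategy of the paper is omitted; all function and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_cost(theta):
    # Toy stand-in for a circuit expectation value; minimum 0 at theta = 0
    return np.sum((1.0 - np.cos(theta)) / 2.0)

def noisy_cost(theta, shots=100):
    # Finite measurement shots give a statistically noisy estimate
    return exact_cost(theta) + rng.normal(0.0, 1.0 / np.sqrt(shots))

def stochastic_grad(theta):
    # Parameter-shift rule: dE/dtheta_i = [E(theta + pi/2 e_i) - E(theta - pi/2 e_i)] / 2
    g = np.zeros_like(theta)
    for i in range(theta.size):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        g[i] = (noisy_cost(theta + shift) - noisy_cost(theta - shift)) / 2.0
    return g

def gp_posterior(xs, ys, xq, length=0.5, sig=1.0, noise=0.05):
    # 1D Gaussian-process regression with an RBF kernel
    k = lambda a, b: sig * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
    K = k(xs, xs) + noise * np.eye(xs.size)
    Ks = k(xq, xs)
    mu = Ks @ np.linalg.solve(K, ys)
    var = sig - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

def line_bo_step(theta, direction, max_step=2.0, n_init=3, n_bo=4):
    # BO of the cost along the ray theta - alpha * direction
    alphas = np.linspace(0.0, max_step, n_init)
    costs = np.array([noisy_cost(theta - a * direction) for a in alphas])
    grid = np.linspace(0.0, max_step, 50)
    for _ in range(n_bo):
        mu, var = gp_posterior(alphas, costs, grid)
        a_next = grid[np.argmin(mu - np.sqrt(var))]  # lower-confidence-bound acquisition
        alphas = np.append(alphas, a_next)
        costs = np.append(costs, noisy_cost(theta - a_next * direction))
    mu, _ = gp_posterior(alphas, costs, grid)
    return grid[np.argmin(mu)]  # step size with the lowest posterior mean cost

def sglbo(theta0, n_iters=30, suffix_frac=0.5):
    theta = theta0.copy()
    history = []
    for _ in range(n_iters):
        g = stochastic_grad(theta)            # SGD direction from noisy estimates
        theta = theta - line_bo_step(theta, g) * g  # BO-chosen step size
        history.append(theta.copy())
    # Suffix averaging: average the parameters over the last iterations
    k = max(1, int(len(history) * suffix_frac))
    return np.mean(np.stack(history[-k:]), axis=0)

theta0 = np.array([1.0, -0.8])
theta_opt = sglbo(theta0)
```

The line search spends extra shots per iteration but lets each SGD step travel as far as the noisy cost landscape supports, and the suffix average damps the residual shot-noise jitter around the optimum.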
Similar articles
Optimization of Quantum Cellular Automata Circuits by Genetic Algorithm
Quantum cellular automata (QCA) enables performing arithmetic and logic operations at the molecular scale. This nanotechnology promises high device density, low power consumption and high computational power. Unlike the CMOS technology where the ON and OFF states of the transistors represent binary information, in QCA, data is represented by the charge configuration. The primary and basic devic...
An Efficient Conjugate Gradient Algorithm for Unconstrained Optimization Problems
In this paper, an efficient conjugate gradient method for unconstrained optimization is introduced. Parameters of the method are obtained by solving an optimization problem, and using a variant of the modified secant condition. The new conjugate gradient parameter benefits from function information as well as gradient information in each iteration. The proposed method has global convergence und...
Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences
We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-...
Efficient Stochastic Gradient Descent for Strongly Convex Optimization
We motivate this study from a recent work on a stochastic gradient descent (SGD) method with only one projection (Mahdavi et al., 2012), which aims at alleviating the computational bottleneck of the standard SGD method in performing the projection at each iteration, and enjoys an O(log T/T ) convergence rate for strongly convex optimization. In this paper, we make further contributions along th...
Gradient-based stochastic optimization methods in Bayesian experimental design
Optimal experimental design (OED) seeks experiments expected to yield the most useful data for some purpose. In practical circumstances where experiments are time-consuming or resource-intensive, OED can yield enormous savings. We pursue OED for nonlinear systems from a Bayesian perspective, with the goal of choosing experiments that are optimal for parameter inference. Our objective in this co...
Journal
Journal title: npj Quantum Information
Year: 2022
ISSN: 2056-6387
DOI: https://doi.org/10.1038/s41534-022-00592-6